36 research outputs found

    CASA 2009: International Conference on Computer Animation and Social Agents

    Get PDF

    Editorial

    Get PDF

    Generating a 3D Simulation of a Car Accident from a Written Description in Natural Language: the CarSim System

    Get PDF
    This paper describes a prototype system to visualize and animate 3D scenes from car accident reports, written in French. The problem of generating such a 3D simulation can be divided into two subtasks: the linguistic analysis and the virtual scene generation. As a means of communication between these two modules, we first designed a template formalism to represent a written accident report. The CarSim system first processes written reports, gathers relevant information, and converts it into a formal description. Then, it creates the corresponding 3D scene and animates the vehicles.
    Comment: 8 pages, ACL 2001, Workshop on Temporal and Spatial Information Processing
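    As a rough illustration of such an intermediate template between the linguistic analysis and the scene generator, the sketch below defines a small accident-report structure. The field names, actions, and coordinate conventions are hypothetical and are not taken from the paper.

    ```python
    # Hypothetical accident-report template: field names, actions and
    # coordinates are illustrative, not the formalism used in the paper.
    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class Vehicle:
        identifier: str                        # e.g. "vehicle_A"
        initial_position: Tuple[float, float]  # 2D position in the scene
        heading_degrees: float                 # initial driving direction

    @dataclass
    class Event:
        actor: str                    # vehicle performing the action
        action: str                   # e.g. "drive_forward", "turn_left", "collide"
        target: Optional[str] = None  # other vehicle involved, if any

    @dataclass
    class AccidentReport:
        vehicles: List[Vehicle] = field(default_factory=list)
        events: List[Event] = field(default_factory=list)

    # The linguistic module would fill such a template from the French text;
    # the scene generator would then place the vehicles and animate the events.
    report = AccidentReport(
        vehicles=[Vehicle("vehicle_A", (0.0, 0.0), 90.0),
                  Vehicle("vehicle_B", (20.0, 5.0), 270.0)],
        events=[Event("vehicle_A", "drive_forward"),
                Event("vehicle_B", "turn_left"),
                Event("vehicle_A", "collide", target="vehicle_B")],
    )
    ```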

    Emotion Capture: Emotionally Expressive Characters for Games

    Get PDF
    It has been shown that humans are sensitive to the portrayal of emotions for virtual characters. However, previous work in this area has often examined this sensitivity using extreme examples of facial or body animation. Less is known about how well people recognize emotions as they are expressed during conversational communication. In order to determine whether body or facial motion is a better indicator of emotional expression for game characters, we conduct a perceptual experiment using synchronized full-body and facial motion-capture data. We find that people can recognize emotions from either modality alone, but combining facial and body motion is preferable in order to create more expressive characters.

    Swift game programming for absolute beginners

    No full text

    Dialogs with BDP Agents in Virtual Environments

    No full text
    In this paper we discuss a multi-modal agent platform based on the BDI paradigm. We now have environments where more than one agent is available and where there is a need for uniformity in agent design and agent interaction. This requires a more general and uniform approach to developing agent-oriented virtual environments that allow multi-modal interaction. We first focus on a formal model for conversational planning agents, for which we discuss the specification of beliefs, desires, and conditional plans. Then we discuss the relation between natural language and communication as it is defined for these agents. We also show how referential problems are treated in the process of interpreting the informational content in a dialog situation. The general agent framework will be used in the specification of agents in a virtual environment that can interact in a multi-modal way with others.
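    A minimal sketch of the belief-desire-plan structure such an agent revolves around is given below. The class layout, attribute names, and plan-selection rule are illustrative assumptions, not the paper's formal specification.

    ```python
    # Illustrative belief-desire-plan agent skeleton; names and the
    # plan-selection rule are assumptions, not the paper's formal model.
    from typing import Callable, Dict, List

    Action = Callable[[], None]

    class ConversationalAgent:
        def __init__(self) -> None:
            self.beliefs: Dict[str, bool] = {}        # propositions the agent holds true
            self.desires: List[str] = []              # goals, in order of preference
            self.plans: Dict[str, List[Action]] = {}  # desire -> sequence of actions

        def observe(self, facts: Dict[str, bool]) -> None:
            """Update beliefs with new information, e.g. from a dialog utterance."""
            self.beliefs.update(facts)

        def deliberate(self) -> List[Action]:
            """Pick the plan for the first desire whose precondition is believed."""
            for desire in self.desires:
                if desire in self.plans and self.beliefs.get("can:" + desire, True):
                    return self.plans[desire]
            return []
    ```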

    Perception of Approach and Reach in Combined Interaction Tasks

    No full text
    Often in games, a virtual character is required to interact with objects in the surrounding environment. These interactions can occur in different locations, with different items, often in combination with environment navigation tasks. This results in switching and blending between different motions in order to fit the restrictions imposed by the character's position and the interaction circumstances. In this paper, we conduct perceptual experiments to gain knowledge about such interactions and deduce important factors about their design for game animators. Our results identify at what point interaction information is obvious, and which body parts are most important to consider. We find that general information about target position is evident from early on in a combined navigation and manipulation task, and can be deduced from very few visual cues. We also learn that participants are highly sensitive to target positions during the interaction phase, relying mostly on indicators in the motion of the character's arm in the final steps.

    Interactive Virtual Humans in Real-Time Virtual Environments

    No full text
    In this paper, we will present an overview of existing research in the vast area of IVH systems. We will also present our ongoing work on improving the expressive capabilities of IVHs. Because of the complexity of interaction, a high level of control is required over the face and body motions of the virtual humans. In order to achieve this, current approaches try to generate face and body motions from a high-level description. Although this indeed allows for precise control over the movement of the virtual human, it is difficult to generate a natural-looking motion from such a high-level description. Another problem that arises when animating IVHs is that motions are not generated all the time. Therefore a flexible animation scheme is required that ensures a natural posture even when no animation is playing. We will present MIRAnim, our animation engine, which uses a combination of motion synthesis from motion capture and a statistical analysis of prerecorded motion clips. As opposed to existing approaches that create new motions with limited flexibility, our model adapts existing motions by automatically adding dependent joint motions. This renders the animation more natural, and since our model does not impose any conditions on the input motion, it can be linked easily with existing gesture synthesis techniques for IVHs. Because we use a linear representation for joint orientations, blending and interpolation are done very efficiently, resulting in an animation engine especially suitable for real-time applications.
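    The efficiency claim rests on blending rotations in a linear vector space rather than on the quaternion sphere. The sketch below shows one common way to do this, an exponential-map (log-quaternion) blend; it is an assumed illustration of the technique, not the MIRAnim implementation.

    ```python
    # Sketch of blending joint orientations in a linear representation
    # (exponential map / log-quaternion); an assumed illustration only.
    import numpy as np

    def quat_log(q: np.ndarray) -> np.ndarray:
        """Map a unit quaternion (w, x, y, z) to a 3-vector: axis scaled by half the angle."""
        w, v = q[0], q[1:]
        norm_v = np.linalg.norm(v)
        if norm_v < 1e-8:
            return np.zeros(3)
        return (np.arccos(np.clip(w, -1.0, 1.0)) / norm_v) * v

    def quat_exp(r: np.ndarray) -> np.ndarray:
        """Map a 3-vector back to a unit quaternion."""
        angle = np.linalg.norm(r)
        if angle < 1e-8:
            return np.array([1.0, 0.0, 0.0, 0.0])
        axis = r / angle
        return np.concatenate(([np.cos(angle)], np.sin(angle) * axis))

    def blend_orientation(q_a: np.ndarray, q_b: np.ndarray, weight: float) -> np.ndarray:
        """Linearly interpolate in log space, then map back to a quaternion."""
        return quat_exp((1.0 - weight) * quat_log(q_a) + weight * quat_log(q_b))

    # Example: halfway between identity and a 90-degree rotation about the z axis.
    q_identity = np.array([1.0, 0.0, 0.0, 0.0])
    q_z90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
    print(blend_orientation(q_identity, q_z90, 0.5))
    ```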

    Emotional communicative body animation for multiple characters

    No full text
    Current body animation systems for Interactive Virtual Humans are mostly procedural or key-frame based. Although such methods give the animation system a high degree of flexibility, it is often not possible to create animations that are as realistic as those obtained with a motion capture system. Simply using motion-captured animation segments instead of key-framed gestures is not a good solution either, since virtual human animation systems also specify parameters of a gesture that affect its style, such as expressing emotions or stressing a part of a speech sequence. In this paper, we describe an animation system that allows for the synthesis of realistic communicative body motions according to an emotional state, while still retaining the flexibility of procedural gesture synthesis systems. These motions are constructed as a blend of idle motions and gesture animations. From an animation specified for only a few joints, the dependent joint motions are calculated automatically and in real time. Realistic balance shifts adapted from motion capture data are generated on the fly, resulting in a fully controllable body animation, adaptable to individual characteristics and directly playable on different characters at the same time.
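    To make the layering concrete, the sketch below blends an idle pose with a gesture specified for a few joints and propagates a scaled fraction of that motion to dependent joints. The joint names, weights, and dependency table are illustrative guesses, not the system described above.

    ```python
    # Illustrative layering of a gesture over an idle pose with propagation to
    # dependent joints; joint names, weights and the dependency table are guesses.
    from typing import Dict

    Pose = Dict[str, float]  # joint name -> one rotation parameter, for brevity

    # Hypothetical coupling: an arm gesture induces a small shoulder and spine motion.
    DEPENDENT_JOINTS = {"right_arm": {"right_shoulder": 0.5, "spine": 0.2}}

    def layer_gesture(idle: Pose, gesture: Pose, weight: float) -> Pose:
        """Blend the gesture over the idle pose and add dependent joint motion."""
        result = dict(idle)
        for joint, value in gesture.items():
            # Weighted blend toward the gesture on the explicitly animated joints.
            result[joint] = (1.0 - weight) * idle.get(joint, 0.0) + weight * value
            # Propagate a scaled fraction of that motion to the coupled joints.
            for dep, factor in DEPENDENT_JOINTS.get(joint, {}).items():
                result[dep] = idle.get(dep, 0.0) + weight * factor * value
        return result

    idle_pose = {"right_arm": 0.1, "right_shoulder": 0.05, "spine": 0.0}
    gesture_pose = {"right_arm": 1.2}
    print(layer_gesture(idle_pose, gesture_pose, 0.8))
    ```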